38 research outputs found

    Future Experimental Improvement for the Search of LNV Process in the eμ Sector

    Exploring the leptonic sector in frontier experiments has become increasingly important, since the conservation of lepton flavor and total lepton number are no longer guaranteed in the Standard Model after the discovery of neutrino oscillations. The μ−+N(A,Z)→e++N(A,Z−2) conversion in a muonic atom is one of the most promising channels to investigate lepton-number-violating processes, and a measurement of this process is planned in future μ−−e− conversion experiments with a muonic atom in a muon-stopping target. This paper discusses how to maximize the experimental sensitivity to the μ−−e+ conversion by introducing a new requirement on the mass relation, M(A,Z−2)<M(A,Z−1), where M(A,Z) is the mass of the muon-stopping target nucleus, to eliminate the background from radiative muon capture. The sensitivity to the μ−−e+ conversion is anticipated to improve by four orders of magnitude in forthcoming experiments using a proper target nucleus that satisfies the mass relation. The most promising isotopes found are 40Ca and 32S. Comment: 8 pages, 4 figures; figures, some numbers, and a reference in the text are modified

    GPU-Accelerated Event Reconstruction for the COMET Phase-I Experiment

    This paper discusses a parallelized event reconstruction for the COMET Phase-I experiment. The experiment aims to discover charged lepton flavor violation by observing 104.97 MeV electrons from neutrinoless muon-to-electron conversion in muonic atoms. The event reconstruction of electrons with multiple helix turns is challenging because hit-to-turn classification incurs a high computational cost. The introduced algorithm finds an optimal seed of position and momentum for each turn partition by investigating the residual sum of squares based on the distance of closest approach (DCA) between hits and a track extrapolated from the seed. Hits with a DCA less than a cutoff value are classified as belonging to the turn represented by the seed. The classification performance was optimized by tuning the cutoff value and refining the set of classified hits. The workload was parallelized over the seeds and the hits by defining two GPU kernels, which record the track parameters extrapolated from the seeds and find the DCAs of the hits, respectively. Reasonable efficiency and momentum resolution were obtained over a wide momentum region that covers both signal and background electrons. The event reconstruction results from the CPU and GPU were identical to each other. The benchmarked GPUs showed an order of magnitude of speedup over a CPU with 16 cores, while the exact speed gains varied depending on their architectures.
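    The core of the classification step can be sketched in a few lines. The following is an illustrative reconstruction under simplifying assumptions (a straight-line track model and invented type and function names; the actual COMET algorithm propagates helices through the detector):

        // Sketch of DCA-based hit-to-turn classification. Illustrative only:
        // the straight-line "extrapolation" and all names are assumptions,
        // not the COMET Phase-I implementation (which propagates a helix).
        #include <cmath>
        #include <cstddef>
        #include <vector>

        struct Hit  { double x, y; };
        struct Seed { double x, y, px, py; };   // turn candidate: position and momentum

        // Perpendicular distance between a hit and the track extrapolated from
        // the seed; a helix propagator would replace this in the real algorithm.
        double distanceOfClosestApproach(const Seed& s, const Hit& h) {
            const double norm = std::hypot(s.px, s.py);
            const double tx = s.px / norm, ty = s.py / norm;   // unit direction
            const double dx = h.x - s.x, dy = h.y - s.y;
            return std::fabs(dx * ty - dy * tx);
        }

        // Hits with DCA below the cutoff are assigned to the seed's turn.
        // On a GPU this loop body maps naturally onto one thread per
        // (seed, hit) pair, which is the parallelization described above.
        std::vector<std::size_t> classifyHits(const Seed& seed,
                                              const std::vector<Hit>& hits,
                                              double dcaCut) {
            std::vector<std::size_t> selected;
            for (std::size_t i = 0; i < hits.size(); ++i)
                if (distanceOfClosestApproach(seed, hits[i]) < dcaCut)
                    selected.push_back(i);
            return selected;
        }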

    Evaluating Portable Parallelization Strategies for Heterogeneous Architectures in High Energy Physics

    High-energy physics (HEP) experiments have developed millions of lines of code over decades that are optimized to run on traditional x86 CPU systems. However, a rapidly increasing fraction of the floating-point computing power in leadership-class computing facilities and traditional data centers comes from new accelerator architectures, such as GPUs. HEP experiments are now faced with the untenable prospect of rewriting millions of lines of x86 CPU code for the increasingly dominant architectures found in these computational accelerators. This task is made more challenging by the architecture-specific languages and APIs promoted by manufacturers such as NVIDIA, Intel, and AMD. Producing multiple architecture-specific implementations is not a viable scenario, given the available person power and code-maintenance issues. The Portable Parallelization Strategies team of the HEP Center for Computational Excellence is investigating the use of Kokkos, SYCL, OpenMP, std::execution::parallel, and alpaka as potential portability solutions that promise to execute on multiple architectures from the same source code, using representative use cases from major HEP experiments, including the DUNE experiment of the Long-Baseline Neutrino Facility and the ATLAS and CMS experiments of the Large Hadron Collider. This cross-cutting evaluation of portability solutions using real applications will help inform and guide the HEP community when choosing software and hardware suites for the next generation of experimental frameworks. We present the outcomes of our studies, including performance metrics, porting challenges, API evaluations, and build-system integration. Comment: 18 pages, 9 figures, 2 tables
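    To make the "same source code, multiple architectures" idea concrete, here is a minimal sketch using C++17 parallel algorithms (the std::execution route named above); the toy calibration workload is our own invention, and on GCC/libstdc++ the parallel policies additionally require linking against TBB:

        // Minimal portable-parallelism sketch with C++17 std::execution.
        // Swapping the execution policy retargets the same algorithm call:
        // std::execution::seq (serial), par (threaded), par_unseq (threaded
        // and vectorized). The workload below is a toy example.
        #include <algorithm>
        #include <execution>
        #include <vector>

        int main() {
            std::vector<double> energies(1'000'000, 1.0);
            std::for_each(std::execution::par_unseq,
                          energies.begin(), energies.end(),
                          [](double& e) { e *= 1.05; });   // apply a toy calibration
        }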

    HFSS Simulation on Cavity Coupling for Axion Detecting Experiment

    In a resonant cavity experiment, it is vital to maximize the signal power at the detector while minimizing the reflection back toward the source. The reflected power is minimized when the impedances of the source and the cavity are matched to each other, a condition called impedance matching. A tunable antenna on the source side is required to achieve impedance matching. The geometry and position of the antenna vary depending on the electromagnetic field of the cavity. This research is dedicated to simulations to find a proper design of the coupling antenna, especially for an axion dark matter detection experiment. The HFSS solver was used for the simulation.
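    The matching condition can be quantified with the standard transmission-line expressions for the reflection coefficient and return loss. The short program below is a generic textbook illustration, not part of the HFSS study; the impedance values are arbitrary, and the actual design work is done by the 3-D field solver:

        // Standard reflection-coefficient / return-loss calculation for a load
        // (here, the coupled cavity) seen from a source line of impedance Z0.
        // Example impedance values are arbitrary placeholders.
        #include <cmath>
        #include <complex>
        #include <cstdio>

        int main() {
            const std::complex<double> Z0(50.0, 0.0);   // source/line impedance [ohm]
            const std::complex<double> Zc(45.0, 5.0);   // cavity input impedance [ohm]

            // Reflection coefficient: Gamma = (Zc - Z0) / (Zc + Z0);
            // |Gamma| = 0 at a perfect match.
            const std::complex<double> gamma = (Zc - Z0) / (Zc + Z0);

            // Return loss in dB: RL = -20 log10 |Gamma|
            // (a large RL means little reflected power).
            const double rl_dB = -20.0 * std::log10(std::abs(gamma));
            std::printf("|Gamma| = %.4f, return loss = %.2f dB\n",
                        std::abs(gamma), rl_dB);
        }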

    Future experimental improvement for the search of lepton-number-violating processes in the eμ sector

    The conservation of lepton flavor and total lepton number are no longer guaranteed in the Standard Model after the discovery of neutrino oscillations. The μ−+N(A,Z)→e++N(A,Z−2) conversion in a muonic atom is one of the most promising channels to investigate lepton-number-violating processes, and measurement of the μ−−e+ conversion is planned in future μ−−e− conversion experiments with a muonic atom in a muon-stopping target. This article discusses experimental strategies to maximize the sensitivity of the μ−−e+ conversion experiment by introducing the new requirement of the mass relation of M(A,Z−2)<M(A,Z−1), where M(A,Z) is the mass of the muon-stopping target nucleus, to eliminate the backgrounds from radiative muon capture. The sensitivity of the μ−−e+ conversion is expected to be improved by 4 orders of magnitude in forthcoming experiments using a proper target nucleus that satisfies the mass relation. The most promising isotopes found are 40Ca and 32S. © 2017 American Physical Society
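    Why the mass relation removes the radiative-muon-capture (RMC) background can be seen from the endpoint kinematics. The following is a schematic reconstruction, not an excerpt from the article; nuclear recoil and atomic effects are neglected, and B_μ denotes the muon binding energy in the muonic atom:

        % Endpoint kinematics, neglecting nuclear recoil (schematic).
        \begin{align*}
          E_{e^+}(\mu^- \to e^+)        &\simeq m_\mu c^2 - B_\mu - \left[M(A,Z-2) - M(A,Z)\right]c^2,\\
          E_\gamma^{\mathrm{RMC,\,max}} &\simeq m_\mu c^2 - B_\mu - \left[M(A,Z-1) - M(A,Z)\right]c^2.
        \end{align*}
        % Hence M(A,Z-2) < M(A,Z-1) implies E_{e^+} > E_gamma^max: positrons
        % from RMC photon conversion cannot reach the signal energy window.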

    Fast DAQ system with image rejection for axion dark matter searches

    A fast data acquisition (DAQ) system for axion dark matter searches utilizing a microwave resonant cavity, also known as axion haloscope searches, has been developed with a two-channel digitizer that can sample 16-bit amplitudes at rates up to 180 MSamples/s. First, we realized a practical DAQ efficiency of greater than 99% for a single DAQ channel, where the DAQ process includes the online fast Fourier transforms (FFTs). Using an IQ mixer and two parallel DAQ channels, we then also implemented a software-based image rejection without losing the DAQ efficiency. This work extends our continuing effort to improve the figure of merit in axion haloscope searches, the scanning rate.
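    The principle behind software image rejection is that complex IQ samples, x[n] = I[n] + iQ[n], keep positive and negative frequencies distinct, so a tone and its image fall into different FFT bins instead of folding onto each other. The sketch below demonstrates this with synthetic tones and a direct DFT; the paper's actual pipeline, frame sizes, and FFT engine are not reproduced here:

        // Software image rejection with IQ data (illustrative sketch).
        // A real-sampled channel cannot tell +f from -f; the complex stream
        // I + iQ can, so the signal and its image separate in the spectrum.
        #include <cmath>
        #include <complex>
        #include <cstdio>
        #include <vector>

        int main() {
            const int N = 256;                 // samples per frame (arbitrary)
            const int kSig = 20;               // signal tone at bin +20
            const int kImg = -20;              // image tone at bin -20
            const std::complex<double> j(0.0, 1.0);

            // Synthesize complex baseband samples x[n] = I[n] + i*Q[n].
            std::vector<std::complex<double>> x(N);
            for (int n = 0; n < N; ++n)
                x[n] = std::exp(j * (2.0 * M_PI * kSig * n / N))
                     + 0.5 * std::exp(j * (2.0 * M_PI * kImg * n / N));

            // Direct DFT of a single bin, for clarity; the DAQ described
            // above computes full FFTs online instead.
            auto bin = [&](int k) {
                std::complex<double> acc(0.0, 0.0);
                for (int n = 0; n < N; ++n)
                    acc += x[n] * std::exp(-j * (2.0 * M_PI * k * n / N));
                return std::abs(acc) / N;
            };
            std::printf("signal bin |X(+20)| = %.3f, image bin |X(-20)| = %.3f\n",
                        bin(kSig), bin(kImg));
        }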

    ACTS GPU Track Reconstruction Demonstrator for HEP

    In future HEP experiments, there will be a significant increase in the computing power required for track reconstruction due to the large data size. As track reconstruction is inherently parallelizable, heterogeneous computing with GPU hardware is expected to outperform conventional CPUs. To achieve better maintainability and high-quality track reconstruction, a host-device compatible event data model and tracking geometry are necessary. However, such a flexible design can be challenging because many GPU APIs restrict the usage of modern C++ features and also have complicated user interfaces. To overcome those issues, the ACTS community has launched several R&D projects: traccc as a GPU track reconstruction demonstrator, detray as a GPU geometry builder, and vecmem as a GPU memory management tool. The event data model of traccc is designed using the vecmem library, which provides an easy user interface to host and device memory allocation through C++ standard containers. For a realistic detector design, traccc utilizes the detray library, which applies compile-time polymorphism in its detector description. A detray detector can be shared between the host and the device, as the detector subcomponents are serialized in a vecmem-based container. Within traccc, tracking algorithms including hit clusterization and seed finding have been ported to multiple GPU APIs. In this presentation, we highlight the recent progress in traccc and present benchmarking results of the tracking algorithms.
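    The phrase "memory allocation through C++ standard containers" refers to the allocator mechanism. The sketch below illustrates only the underlying C++17 facility (std::pmr); it is not the vecmem API, which supplies analogous memory resources for host and device (e.g. CUDA) allocations:

        // Illustration of the memory-resource idea behind vecmem, using only
        // standard C++17 facilities (std::pmr). This is NOT the vecmem API.
        #include <cstdio>
        #include <memory_resource>
        #include <vector>

        // The algorithm is written once against a pmr container; which memory
        // the elements live in is decided by the resource injected from outside.
        double sumOfMeasurements(std::pmr::memory_resource* mr) {
            std::pmr::vector<double> measurements(mr);   // allocation backend injected
            for (int i = 0; i < 5; ++i) measurements.push_back(0.5 * i);
            double sum = 0.0;
            for (double m : measurements) sum += m;
            return sum;
        }

        int main() {
            std::pmr::monotonic_buffer_resource arena(1024);  // stand-in "host" resource
            std::printf("sum = %.2f\n", sumOfMeasurements(&arena));
        }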

    Radiation hardness study for the COMET Phase-I electronics

    Radiation damage to the front-end readout and trigger electronics is an important issue in the COMET Phase-I experiment at J-PARC, which plans to search for the neutrinoless transition of a muon to an electron. To produce an intense muon beam, a high-power proton beam impinges on a graphite target, resulting in a high-radiation environment. We require radiation tolerance to a total dose of 1.0 kGy and a 1 MeV equivalent neutron fluence of 1.0×10¹² neq cm⁻², including a safety factor of 5, over the duration of the physics measurement. The use of commercially available electronic components with high radiation tolerance, if such components can be secured, is desirable in such an environment. The radiation hardness of commercial electronic components has been evaluated in gamma-ray and neutron irradiation tests. As a result of these tests, voltage regulators, ADCs, DACs, and several other components were found to have sufficient tolerance to both gamma-ray and neutron irradiation at the level we require. © 2019 Elsevier B.V. All rights reserved.